Extending drone applications to complex missions requires a stable control framework. Recently, deep reinforcement learning (RL) algorithms have been studied extensively for robot control to accomplish complex tasks. Unfortunately, deep RL algorithms may not be suitable for direct deployment on real-world robot platforms due to the difficulty of interpreting learned policies and the lack of stability guarantees, especially for complex tasks such as a climbing drone. This paper proposes a novel hybrid architecture that augments a nominal controller with a robust policy learned by a model-free deep RL algorithm. The proposed architecture employs an uncertainty-aware control mixer to preserve the guaranteed stability of the nominal controller while exploiting the extended performance of the learned policy. The policy is trained in a simulated environment with thousands of domain randomizations to achieve robust performance under diverse uncertainties. The performance of the proposed method is verified through real-world experiments and then compared with a conventional controller and a state-of-the-art learning-based controller trained with a vanilla deep RL algorithm.
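Since the abstract describes the uncertainty-aware control mixer only at a high level, here is a minimal sketch of one plausible blending rule; the function name, the uncertainty-to-trust mapping, and the clipping bound are illustrative assumptions, not the paper's formulation.

```python
import numpy as np

def mix_commands(u_nominal, u_learned, uncertainty, u_max=1.0):
    """Blend the nominal controller's command with the learned policy's command.

    The learned policy is trusted only when the estimated uncertainty is low,
    so the mix falls back to the nominal controller (and its stability
    guarantees) in unfamiliar states. Illustrative rule, not the paper's mixer.
    """
    trust = 1.0 / (1.0 + float(uncertainty))        # map [0, inf) -> (0, 1]
    u = (1.0 - trust) * np.asarray(u_nominal) + trust * np.asarray(u_learned)
    return np.clip(u, -u_max, u_max)                # respect actuator limits
```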
In this paper, we propose Walk-VIO, a new visual-inertial odometry (VIO) method with walking-motion-adaptive leg kinematic constraints, for the localization of quadruped robots whose body motion changes as they walk. Quadruped robots mainly use VIO because they need fast localization for control and path planning. However, since quadruped robots are mostly used outdoors, extraneous features extracted from the sky or the ground cause tracking failures. In addition, the walking motion of quadruped robots causes swaying, which degrades the localization accuracy obtained from the camera and the inertial measurement unit (IMU). To overcome these limitations, many researchers combine VIO with leg kinematic constraints. However, because the walking motion of a quadruped robot varies with the controller, the gait, the robot's speed, and so on, these factors should be considered when adding leg kinematic constraints. We propose a VIO that can be used regardless of the walking motion by adjusting the leg kinematic constraint factors. To evaluate Walk-VIO, we create and release datasets of quadruped robots moving with various types of walking motion in a simulation environment. In addition, we verify the effectiveness of Walk-VIO through comparison with current state-of-the-art algorithms.
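As one way to picture the "adjusting the leg kinematic constraint factors" idea, the toy scheduler below down-weights a leg-odometry residual as the walking motion becomes more aggressive; the inputs and the functional form are assumptions for illustration only, not Walk-VIO's actual rule.

```python
def leg_constraint_weight(base_weight, swing_ratio, body_speed,
                          alpha=2.0, beta=0.5):
    """Scale the weight of the leg kinematic constraint in the VIO cost.

    Faster or more aggressive walking makes leg odometry less reliable, so the
    constraint is trusted less. Toy scheduling rule for illustration only.
    """
    return base_weight / (1.0 + alpha * swing_ratio + beta * body_speed)
```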
Vision Transformer (ViT) extracts the final representation from either the class token or an average of all patch tokens, following the architecture of the Transformer in Natural Language Processing (NLP) or Convolutional Neural Networks (CNNs) in computer vision. However, studies on the best way of aggregating the patch tokens are still limited to average pooling, even though widely used pooling strategies, such as max and GeM pooling, could be considered. Despite their effectiveness, the existing pooling strategies do not consider the architecture of ViT and the channel-wise difference in the activation maps, aggregating the crucial and trivial channels with the same importance. In this paper, we present Group Generalized Mean (GGeM) pooling as a simple yet powerful pooling strategy for ViT. GGeM divides the channels into groups and computes GeM pooling with a shared pooling parameter per group. As ViT groups the channels via a multi-head attention mechanism, grouping the channels by GGeM leads to lower head-wise dependence while amplifying important channels on the activation maps. Exploiting GGeM shows 0.1%p to 0.7%p performance boosts compared to the baselines and achieves state-of-the-art performance for ViT-Base and ViT-Large models in the ImageNet-1K classification task. Moreover, GGeM outperforms the existing pooling strategies on image retrieval and multi-modal representation learning tasks, demonstrating the superiority of GGeM for a variety of tasks. GGeM is a simple algorithm in that only a few lines of code are necessary for implementation.
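A minimal sketch of group GeM pooling as described above, assuming a (batch, tokens, channels) layout and one learnable exponent per channel group; the initialization and numerical details are assumptions, so the official code should be consulted for the exact implementation.

```python
import torch
import torch.nn as nn

class GGeM(nn.Module):
    """Group Generalized Mean pooling over ViT patch tokens (sketch)."""

    def __init__(self, dim, num_groups, p_init=3.0, eps=1e-6):
        super().__init__()
        assert dim % num_groups == 0, "channels must split evenly into groups"
        self.num_groups = num_groups
        self.eps = eps
        # One shared learnable pooling exponent per channel group.
        self.p = nn.Parameter(torch.full((num_groups,), p_init))

    def forward(self, tokens):                      # tokens: (B, N, D)
        b, n, d = tokens.shape
        g = self.num_groups
        x = tokens.view(b, n, g, d // g)            # split channels into groups
        x = x.clamp(min=self.eps).pow(self.p.view(1, 1, g, 1))
        x = x.mean(dim=1)                           # GeM: mean of x^p over tokens,
        x = x.pow(1.0 / self.p.view(1, g, 1))       # then the p-th root
        return x.reshape(b, d)                      # pooled (B, D) representation
```

For instance, a hypothetical `GGeM(dim=768, num_groups=12)` would mirror ViT-Base's 12 attention heads; in this sketch the group count is simply a hyperparameter.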
Ensembles of deep neural networks have demonstrated superior performance, but their heavy computational cost hinders applying them to resource-limited environments. This motivates distilling knowledge from an ensemble teacher into a smaller student network, and there are two important design choices for such ensemble distillation: 1) how to construct the student network, and 2) which data should be shown during training. In this paper, we propose a weight-averaging technique in which a student with multiple subnetworks is trained to absorb the functional diversity of the ensemble teacher, but the subnetworks are properly averaged for inference, giving a single student network with no additional inference cost. We also propose a perturbation strategy that seeks inputs from which the teachers' functional diversity can be better transferred to the student. Combining these two, our method significantly improves upon previous approaches on various image classification tasks.
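The averaging step that removes the extra inference cost can be pictured as a plain parameter average over the trained subnetworks; the sketch below assumes the subnetworks share an identical architecture and is not the paper's exact procedure.

```python
import copy
import torch

def average_subnetworks(subnet_state_dicts):
    """Average the parameters of several student subnetworks into one network
    used for inference (sketch; assumes identical architectures and keys)."""
    avg = copy.deepcopy(subnet_state_dicts[0])
    for key in avg:
        stacked = torch.stack([sd[key].float() for sd in subnet_state_dicts])
        avg[key] = stacked.mean(dim=0).to(avg[key].dtype)
    return avg  # load into a single student via model.load_state_dict(avg)
```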
Recently, learned image compression methods have outperformed traditional hand-crafted ones, including BPG. One of the keys to this success is the learned entropy model, which estimates the probability distribution of the quantized latent representation. As in other vision tasks, most recent learned entropy models are based on convolutional neural networks (CNNs). However, CNNs are limited in modeling long-range dependencies due to their local connectivity, which can be a significant bottleneck in image compression, where reducing spatial redundancy is a key point. To overcome this issue, we propose a novel entropy model called Information Transformer (Informer), which exploits both global and local information in a content-dependent manner using an attention mechanism. Our experiments show that Informer improves rate-distortion performance over state-of-the-art methods on the Kodak and Tecnick datasets without the quadratic computational complexity problem. Our source code is available at https://github.com/naver-ai/informer.
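For context on what such an entropy model predicts, the snippet below computes the estimated bit cost of integer-quantized latents under a mean-scale Gaussian prior, a standard formulation in learned compression; this is generic background rather than the paper's architecture.

```python
import torch

def estimated_bits(latents, mean, scale, eps=1e-9):
    """Bit cost of quantized latents under a mean-scale Gaussian entropy model.

    The entropy model (e.g., a CNN- or attention-based predictor) supplies
    `mean` and `scale`; the rate is the negative log2 probability mass of each
    integer-quantized value. Generic formulation used for illustration.
    """
    normal = torch.distributions.Normal(mean, scale)
    prob = (normal.cdf(latents + 0.5) - normal.cdf(latents - 0.5)).clamp(min=eps)
    return -prob.log2().sum()
```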
The problem with class-imbalanced data is that the generalization performance of a classifier deteriorates due to the lack of data for minority classes. In this paper, we propose a new minority-class oversampling method that augments diverse minority samples by leveraging the rich contexts of majority-class images as backgrounds. To diversify the minority samples, our key idea is to paste foreground patches from minority classes onto background images from majority classes that have rich contexts. Our method is simple and can be easily combined with existing long-tailed recognition methods. We demonstrate the effectiveness of the proposed oversampling method through extensive experiments and ablation studies. Without any architectural changes or complex algorithms, our method achieves state-of-the-art performance on various long-tailed classification benchmarks. Our code will be made publicly available.
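The core augmentation can be sketched as a CutMix-style paste of a minority-class region onto a majority-class background; the patch-sampling scheme below is an assumption for illustration, and the two images are assumed to share the same resolution.

```python
import numpy as np

def paste_minority_on_background(background, minority, rng=None,
                                 min_frac=0.3, max_frac=0.7):
    """Paste a random patch of a minority-class image onto a majority-class
    background image; the result is labeled with the minority class (sketch)."""
    rng = rng or np.random.default_rng()
    h, w = background.shape[:2]
    ph = int(h * rng.uniform(min_frac, max_frac))   # patch height
    pw = int(w * rng.uniform(min_frac, max_frac))   # patch width
    top = rng.integers(0, h - ph + 1)
    left = rng.integers(0, w - pw + 1)
    out = background.copy()
    out[top:top + ph, left:left + pw] = minority[top:top + ph, left:left + pw]
    return out
```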
Transformers are transforming the landscape of computer vision, especially for recognition tasks. Detection Transformer (DETR) is the first fully end-to-end learning system for object detection, while Vision Transformer (ViT) is the first fully transformer-based architecture for image classification. In this paper, we integrate Vision and Detection Transformers (ViDT) to build an effective and efficient object detector. ViDT introduces a reconfigured attention module to extend the recent Swin Transformer into a standalone object detector, followed by a computationally efficient transformer decoder that exploits multi-scale features and auxiliary techniques to boost detection performance without much increase in computational load. Extensive evaluation on the Microsoft COCO benchmark dataset shows that ViDT obtains the best AP and latency trade-off among existing transformer-based object detectors, and achieves 49.2 AP owing to its high scalability for large models. We will release the code and trained models at https://github.com/naver-ai/vidt
Vision Transformer (ViT) extends the application range of transformers from language processing to computer vision tasks, serving as an alternative architecture to existing convolutional neural networks (CNNs). Since the transformer-based architecture is a recent innovation in computer vision modeling, design conventions for an effective architecture have not been studied as much yet. Drawing on the successful design principles of CNNs, we investigate the role of spatial dimension conversion and its effectiveness on a transformer-based architecture. We particularly attend to the dimension reduction principle of CNNs: as the depth increases, a conventional CNN increases the channel dimension and decreases the spatial dimensions. We empirically show that such spatial dimension reduction is beneficial to a transformer architecture as well, and propose a novel Pooling-based Vision Transformer (PiT) built upon the original ViT model. We show that PiT achieves improved model capability and generalization performance compared to ViT. Through extensive experiments, we further show that PiT outperforms the baseline on several tasks such as image classification, object detection, and robustness evaluation. Source code and ImageNet models are available at https://github.com/naver-ai/pit.
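The spatial-reduction idea can be sketched as a pooling layer between transformer stages that shrinks the token grid while widening the channels; the official PiT layer uses a depthwise convolution and handles the class token separately, which this simplified version omits.

```python
import torch
import torch.nn as nn

class TokenPooling(nn.Module):
    """Reduce the spatial token resolution and expand the channel dimension
    between transformer stages (simplified sketch of PiT-style pooling)."""

    def __init__(self, dim_in, dim_out, stride=2):
        super().__init__()
        self.pool = nn.Conv2d(dim_in, dim_out, kernel_size=stride, stride=stride)

    def forward(self, tokens, height, width):       # tokens: (B, H*W, dim_in)
        b, n, c = tokens.shape
        x = tokens.transpose(1, 2).reshape(b, c, height, width)
        x = self.pool(x)                             # e.g. 14x14 -> 7x7 tokens
        return x.flatten(2).transpose(1, 2)          # (B, (H/2)*(W/2), dim_out)
```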
Unsupervised domain adaptation (UDA) for semantic segmentation is a promising task freeing people from heavy annotation work. However, domain discrepancies in low-level image statistics and high-level contexts compromise the segmentation performance over the target domain. A key idea to tackle this problem is to perform both image-level and feature-level adaptation jointly. Unfortunately, there is a lack of such unified approaches for UDA tasks in the existing literature. This paper proposes a novel UDA pipeline for semantic segmentation that unifies image-level and feature-level adaptation. Concretely, for image-level domain shifts, we propose a global photometric alignment module and a global texture alignment module that align images in the source and target domains in terms of image-level properties. For feature-level domain shifts, we perform global manifold alignment by projecting pixel features from both domains onto the feature manifold of the source domain; and we further regularize category centers in the source domain through a category-oriented triplet loss and perform target domain consistency regularization over augmented target domain images. Experimental results demonstrate that our pipeline significantly outperforms previous methods. In the commonly tested GTA5$\rightarrow$Cityscapes task, our proposed method using Deeplab V3+ as the backbone surpasses previous SOTA by 8%, achieving 58.2% in mIoU.
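The category-oriented triplet loss mentioned above can be pictured as a standard margin-based triplet objective over pixel features and category centers; the margin, distance metric, and negative-center sampling below are assumptions, not the paper's exact definition.

```python
import torch
import torch.nn.functional as F

def category_triplet_loss(features, pos_centers, neg_centers, margin=0.3):
    """Pull pixel features toward their own category center and push them away
    from another category's center (illustrative triplet formulation).

    features:    (N, D) pixel feature vectors
    pos_centers: (N, D) center of each feature's ground-truth category
    neg_centers: (N, D) center of a different (negative) category
    """
    d_pos = F.pairwise_distance(features, pos_centers)
    d_neg = F.pairwise_distance(features, neg_centers)
    return F.relu(d_pos - d_neg + margin).mean()
```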
Different people speak with diverse personalized speaking styles. Although existing one-shot talking head methods have made significant progress in lip sync, natural facial expressions, and stable head motions, they still cannot generate diverse speaking styles in the final talking head videos. To tackle this problem, we propose a one-shot style-controllable talking face generation framework. In a nutshell, we aim to attain a speaking style from an arbitrary reference speaking video and then drive the one-shot portrait to speak with the reference speaking style and another piece of audio. Specifically, we first develop a style encoder to extract dynamic facial motion patterns of a style reference video and then encode them into a style code. Afterward, we introduce a style-controllable decoder to synthesize stylized facial animations from the speech content and style code. In order to integrate the reference speaking style into generated videos, we design a style-aware adaptive transformer, which enables the encoded style code to adjust the weights of the feed-forward layers accordingly. Thanks to the style-aware adaptation mechanism, the reference speaking style can be better embedded into synthesized videos during decoding. Extensive experiments demonstrate that our method is capable of generating talking head videos with diverse speaking styles from only one portrait image and an audio clip while achieving authentic visual effects. Project Page: https://github.com/FuxiVirtualHuman/styletalk.
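To give a feel for how a style code could modulate a transformer's feed-forward layers, the sketch below applies a style-conditioned scale and shift to the hidden activations; StyleTalk's style-aware adaptive transformer adjusts the feed-forward weights themselves, so this scheme and all layer names are simplifying assumptions.

```python
import torch
import torch.nn as nn

class StyleAdaptiveFeedForward(nn.Module):
    """Feed-forward block whose hidden activations are modulated by a style code
    (simplified sketch; only an approximation of StyleTalk's weight adaptation)."""

    def __init__(self, dim, hidden_dim, style_dim):
        super().__init__()
        self.fc1 = nn.Linear(dim, hidden_dim)
        self.fc2 = nn.Linear(hidden_dim, dim)
        self.to_scale_shift = nn.Linear(style_dim, 2 * hidden_dim)

    def forward(self, x, style_code):   # x: (B, T, dim), style_code: (B, style_dim)
        scale, shift = self.to_scale_shift(style_code).chunk(2, dim=-1)
        h = torch.relu(self.fc1(x))
        h = h * (1 + scale.unsqueeze(1)) + shift.unsqueeze(1)   # style modulation
        return self.fc2(h)
```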